Detecting masses in mammograms is important due to the high incidence and mortality of breast cancer. In mammogram mass detection, explicitly modeling pairwise lesion correspondence is particularly important. However, most existing methods build relatively coarse correspondence and have not utilized correspondence supervision. In this paper, we propose a new transformer-based framework, CL-Net, to learn lesion detection and pairwise correspondence in an end-to-end manner. In CL-Net, a view-interactive lesion detector is proposed to achieve dynamic interaction among cross-view candidates, while a lesion linker employs the correspondence supervision to guide the interaction process more accurately. The combination of these two designs achieves a precise understanding of pairwise lesion correspondence in mammograms. Experiments show that CL-Net yields state-of-the-art performance on the public DDSM dataset and our in-house dataset. Moreover, it outperforms previous methods in the low-FPI regime.
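The cross-view interaction at the heart of this design can be pictured with a minimal sketch: lesion candidates from the two mammographic views (e.g., CC and MLO) attend to each other so that each candidate is refined with evidence from the other view. The PyTorch module below is a hypothetical simplification for illustration only; the layer sizes, the single shared cross-attention block, and the residual normalization are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossViewInteraction(nn.Module):
    """Hypothetical sketch: let lesion candidates from one mammographic view
    attend to candidates from the other view (weights shared across directions)."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, cand_a, cand_b):
        # cand_a: (B, Na, dim) candidates of view A; cand_b: (B, Nb, dim) of view B
        upd_a, _ = self.attn(cand_a, cand_b, cand_b)   # view A queries view B
        upd_b, _ = self.attn(cand_b, cand_a, cand_a)   # view B queries view A
        return self.norm(cand_a + upd_a), self.norm(cand_b + upd_b)

# toy usage with random candidate embeddings
cc = torch.randn(2, 100, 256)    # 100 candidates from the CC view
mlo = torch.randn(2, 100, 256)   # 100 candidates from the MLO view
cc_ref, mlo_ref = CrossViewInteraction()(cc, mlo)
```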
This paper explores the point set representation for tubular structure extraction tasks. Compared with the traditional mask representation, the point set representation enjoys flexibility and representation ability that are not restricted by the fixed grid of a mask. Inspired by this, we propose PointScatter, an alternative to segmentation models for the tubular structure extraction task. PointScatter splits the image into scatter regions and predicts points for each scatter region. We further propose a greedy region-wise bipartite matching algorithm to train the network end-to-end. We benchmark PointScatter on four public tubular datasets, and extensive experiments on the tubular structure segmentation and centerline extraction tasks demonstrate the effectiveness of our method. The code is available at https://github.com/zhangzhao2022/pointscatter.
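To make the matching step concrete, here is a minimal sketch of greedy bipartite matching within one scatter region: predicted points are paired with ground-truth points in ascending order of pairwise distance, each point used at most once. This is an illustrative assumption about how such a greedy matcher can work, not the released PointScatter code.

```python
import numpy as np

def greedy_region_match(pred_pts, gt_pts):
    """Greedily match predicted points to ground-truth points inside one
    scatter region by ascending pairwise L2 distance (hypothetical sketch)."""
    if len(pred_pts) == 0 or len(gt_pts) == 0:
        return []
    cost = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    # enumerate all (pred, gt) pairs from cheapest to most expensive
    order = np.dstack(np.unravel_index(np.argsort(cost, axis=None), cost.shape))[0]
    used_p, used_g, matches = set(), set(), []
    for i, j in order:
        if i not in used_p and j not in used_g:
            matches.append((int(i), int(j)))
            used_p.add(i); used_g.add(j)
    return matches

pred = np.array([[1.0, 1.0], [4.0, 4.0], [9.0, 9.0]])
gt = np.array([[1.2, 0.9], [8.8, 9.1]])
print(greedy_region_match(pred, gt))   # [(0, 0), (2, 1)]
```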
Code generation aims to automatically generate code snippets from natural language descriptions. Typically, mainstream code generation methods rely on a large amount of paired training data consisting of natural language descriptions and code. However, in some domain-specific scenarios, it is difficult to build such a large paired corpus for code generation, because no paired data is directly available and considerable manual effort is needed to write code descriptions for constructing a high-quality training dataset. With limited training data, the generation model cannot be well trained and is likely to overfit, making it unsatisfactory for real-world use. To this end, in this paper we propose a task augmentation method that incorporates domain knowledge into the code generation model through auxiliary tasks and a subtoken-level TranX model, obtained by extending the original TranX model to support subtoken-level code generation. To verify our proposed approach, we collect a real-world code generation dataset and conduct experiments on it. Our experimental results show that the subtoken-level TranX model outperforms the original TranX model and the Transformer model on our dataset, and that with the help of our task augmentation method the exact match accuracy of Subtoken-TranX improves significantly, by 12.75%. The model's performance on multiple code categories meets the requirements for application in industrial systems. Our proposed approach has been adopted by Alibaba's BizCook platform. To the best of our knowledge, this is the first domain code generation system adopted in an industrial development environment.
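The core idea of generating code at the subtoken level is that rare compound identifiers are decomposed into frequent, reusable pieces. The snippet below is a hypothetical illustration of such splitting (snake_case and camelCase boundaries); it is not the preprocessing actually used with Subtoken-TranX.

```python
import re

def split_subtokens(identifier):
    """Split an identifier into subtokens by snake_case and camelCase
    boundaries (illustrative assumption, not the paper's exact tokenizer)."""
    subtokens = []
    for part in identifier.split("_"):
        # split camelCase / PascalCase while keeping acronyms together
        subtokens += re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+|\d+", part)
    return [s.lower() for s in subtokens if s]

print(split_subtokens("getUserHTTPResponse_code"))
# ['get', 'user', 'http', 'response', 'code']
```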
Code generation focuses on automatically converting natural language (NL) utterances into code snippets. Sequence-to-tree (Seq2Tree) approaches such as TranX have been proposed for code generation with a guarantee on the compilability of the generated code; they generate each subsequent abstract syntax tree (AST) node relying on antecedent predictions of AST nodes. Existing Seq2Tree methods tend to treat antecedent predictions and subsequent predictions equally. However, under the AST constraints it is hard for Seq2Tree models to produce correct subsequent predictions on the basis of incorrect antecedent predictions. Therefore, antecedent predictions should receive more attention than subsequent ones. To this end, in this paper we propose an effective method called APTranX (Antecedent Prioritized TranX), built on TranX. APTranX incorporates an antecedent-prioritized (AP) loss, which helps the model attach importance to antecedent predictions by exploiting the position information of the generated AST nodes. With better antecedent predictions and, in turn, better subsequent predictions, APTranX significantly improves performance. We conduct extensive experiments on several benchmark datasets, and the experimental results demonstrate the superiority and generality of our proposed method compared with state-of-the-art methods.
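One way to picture an antecedent-prioritized loss is as a position-weighted cross-entropy over the sequence of AST-node actions, where earlier (antecedent) steps carry larger weights. The sketch below is a minimal hypothetical version in PyTorch; the exponential decay schedule is an assumption and not the formulation used in APTranX.

```python
import torch
import torch.nn.functional as F

def antecedent_prioritized_loss(logits, targets, decay=0.95):
    """Position-weighted cross-entropy: earlier AST-node predictions receive
    larger weights (hypothetical sketch of an antecedent-prioritized loss).

    logits: (T, V) scores over V candidate actions at T generation steps
    targets: (T,) gold action indices
    """
    steps = logits.size(0)
    per_step = F.cross_entropy(logits, targets, reduction="none")   # (T,)
    weights = decay ** torch.arange(steps, dtype=per_step.dtype)    # 1, 0.95, ...
    return (weights * per_step).sum() / weights.sum()

logits = torch.randn(10, 50)               # 10 steps, 50 candidate actions
targets = torch.randint(0, 50, (10,))
print(antecedent_prioritized_loss(logits, targets))
```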
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs are with a large number of parameters, which makes these GNNs computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a light-weighted model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias from the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborates that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
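As a rough picture of the kind of objective such a framework optimizes, the sketch below combines a standard soft-label distillation term with a simple fairness regularizer that penalizes the gap between group-wise mean predictions. This is an illustrative assumption for intuition only, not RELIANT's actual debiasing mechanism.

```python
import torch
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, sensitive, T=2.0, lam=1.0):
    """Soft-label KD plus a demographic-parity-style penalty on the gap
    between group-wise average predictions (hypothetical sketch, not RELIANT).

    student_logits, teacher_logits: (N, C); sensitive: (N,) group ids in {0, 1}
    """
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * T * T
    probs = F.softmax(student_logits, dim=1)
    gap = (probs[sensitive == 0].mean(0) - probs[sensitive == 1].mean(0)).abs().sum()
    return kd + lam * gap

student = torch.randn(32, 4)
teacher = torch.randn(32, 4)
groups = torch.randint(0, 2, (32,))
print(fair_kd_loss(student, teacher, groups))
```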
Despite significant progress in object categorization, in recent years, a number of important challenges remain; mainly, the ability to learn from limited labeled data and to recognize object classes within large, potentially open, sets of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited-sized class vocabularies and typically requires separation between supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of vocabulary-informed learning to alleviate the above-mentioned challenges and address problems of supervised, zero-shot, generalized zero-shot and open set recognition using a unified framework. Specifically, we propose a weighted maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms. Distance constraints ensure that labeled samples are projected closer to their correct prototypes, in the embedding space, than to others. We illustrate that the resulting model shows improvements in supervised, zero-shot, generalized zero-shot, and large open set recognition, with up to a 310K-class vocabulary on the Animal with Attributes and ImageNet datasets.
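The distance constraint can be written compactly as a hinge: each embedded sample should lie closer to its correct vocabulary prototype than to every other prototype by at least a margin. Below is a hypothetical sketch of that constraint as a loss; the per-constraint weighting of the actual framework is omitted.

```python
import torch

def margin_distance_loss(embeddings, labels, prototypes, margin=1.0):
    """Hinge loss enforcing that each embedded sample is closer to its own
    prototype than to any other prototype by `margin` (illustrative sketch).

    embeddings: (N, D); labels: (N,); prototypes: (K, D) vocabulary atoms
    """
    dists = torch.cdist(embeddings, prototypes)           # (N, K)
    pos = dists.gather(1, labels.view(-1, 1))             # distance to correct prototype
    mask = torch.ones_like(dists, dtype=torch.bool)
    mask.scatter_(1, labels.view(-1, 1), False)
    neg = dists[mask].view(len(embeddings), -1)           # distances to all other prototypes
    return torch.clamp(margin + pos - neg, min=0).mean()

x = torch.randn(16, 64)
y = torch.randint(0, 10, (16,))
protos = torch.randn(10, 64)   # e.g., semantic embeddings of vocabulary atoms
print(margin_distance_loss(x, y, protos))
```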
Advances in computer vision and machine learning techniques have led to significant development in 2D and 3D human pose estimation from RGB cameras, LiDAR, and radars. However, human pose estimation from images is adversely affected by occlusion and lighting, which are common in many scenarios of interest. Radar and LiDAR technologies, on the other hand, need specialized hardware that is expensive and power-intensive. Furthermore, placing these sensors in non-public areas raises significant privacy concerns. To address these limitations, recent research has explored the use of WiFi antennas (1D sensors) for body segmentation and key-point body detection. This paper further expands on the use of the WiFi signal in combination with deep learning architectures, commonly used in computer vision, to estimate dense human pose correspondence. We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.
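As a rough sketch of the kind of mapping described above, the model below takes phase and amplitude features and produces, at each spatial location, a part-segmentation distribution over 24 body regions plus background and per-part UV regressions. All channel counts, layer choices, and output shapes here are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class WiFiDensePoseHead(nn.Module):
    """Hypothetical sketch: map WiFi phase/amplitude features to DensePose-style
    outputs (24 body parts + background, and U/V coordinates per part)."""
    def __init__(self, in_ch=6, parts=24):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.part_head = nn.Conv2d(128, parts + 1, 1)   # part classification logits
        self.uv_head = nn.Conv2d(128, parts * 2, 1)     # U and V for each part

    def forward(self, x):
        feats = self.backbone(x)
        return self.part_head(feats), torch.sigmoid(self.uv_head(feats))

# toy input: 3 phase + 3 amplitude channels on a 32x32 feature lattice
signals = torch.randn(1, 6, 32, 32)
parts, uv = WiFiDensePoseHead()(signals)
print(parts.shape, uv.shape)   # torch.Size([1, 25, 32, 32]) torch.Size([1, 48, 32, 32])
```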
With the increasing ability of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions only based on contexts augmented with a few training examples. It has been a new trend exploring ICL to evaluate and extrapolate the ability of LLMs. In this paper, we aim to survey and summarize the progress, challenges, and future work in ICL. We first present a formal definition of ICL and clarify its correlation to related studies. Then, we organize and discuss advanced techniques of ICL, including training strategies, prompting strategies, and so on. Finally, we present the challenges of ICL and provide potential directions for further research. We hope our work can encourage more research on uncovering how ICL works and improving ICL in future work.
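To ground the definition of ICL, here is a minimal sketch of how an in-context prompt is assembled from a few demonstrations and a test query before being fed to a frozen LLM; the template wording is an arbitrary assumption.

```python
def build_icl_prompt(demos, query, template="Review: {x}\nSentiment: {y}\n"):
    """Concatenate a few (input, label) demonstrations with the test query;
    the frozen LLM then predicts the continuation (illustrative sketch)."""
    prompt = "".join(template.format(x=x, y=y) for x, y in demos)
    return prompt + f"Review: {query}\nSentiment:"

demos = [("A gripping, beautifully shot film.", "positive"),
         ("Dull plot and wooden acting.", "negative")]
print(build_icl_prompt(demos, "An instant classic."))
```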
Designing better deep networks and better reinforcement learning (RL) algorithms are both important for deep RL. This work focuses on the former. Previous methods build the network with several modules like CNN, LSTM and Attention. Recent methods combine the Transformer with these modules for better performance. However, it requires tedious optimization skills to train a network composed of mixed modules, making these methods inconvenient to be used in practice. In this paper, we propose to design \emph{pure Transformer-based networks} for deep RL, aiming at providing off-the-shelf backbones for both the online and offline settings. Specifically, the Transformer in Transformer (TIT) backbone is proposed, which cascades two Transformers in a very natural way: the inner one is used to process a single observation, while the outer one is responsible for processing the observation history; combining both is expected to extract spatial-temporal representations for good decision-making. Experiments show that TIT can achieve satisfactory performance in different settings, consistently.
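A minimal sketch of the cascaded design: an inner Transformer encodes each single observation (split into tokens), producing one embedding per time step, and an outer Transformer processes the sequence of these embeddings over the observation history. Layer counts, pooling by mean, and taking the last step's output below are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TinyTIT(nn.Module):
    """Hypothetical Transformer-in-Transformer sketch for RL observations:
    inner encoder per observation, outer encoder over the history."""
    def __init__(self, token_dim=64, n_heads=4):
        super().__init__()
        inner_layer = nn.TransformerEncoderLayer(token_dim, n_heads, batch_first=True)
        outer_layer = nn.TransformerEncoderLayer(token_dim, n_heads, batch_first=True)
        self.inner = nn.TransformerEncoder(inner_layer, num_layers=2)   # one observation
        self.outer = nn.TransformerEncoder(outer_layer, num_layers=2)   # history

    def forward(self, obs_tokens):
        # obs_tokens: (B, T, N, D) = batch, history length, tokens per observation, dim
        B, T, N, D = obs_tokens.shape
        per_obs = self.inner(obs_tokens.reshape(B * T, N, D)).mean(dim=1)   # (B*T, D)
        history = per_obs.reshape(B, T, D)
        return self.outer(history)[:, -1]   # representation of the latest step

x = torch.randn(2, 8, 16, 64)   # 2 trajectories, 8 steps, 16 tokens per observation
print(TinyTIT()(x).shape)       # torch.Size([2, 64])
```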
Recently, deep learning has shown its advantage in representation learning and clustering for time series data. Despite the considerable progress, the existing deep time series clustering approaches mostly seek to train the deep neural network by some instance reconstruction based or cluster distribution based objective, which, however, lack the ability to exploit the sample-wise (or augmentation-wise) contrastive information or even the higher-level (e.g., cluster-level) contrastiveness for learning discriminative and clustering-friendly representations. In light of this, this paper presents a deep temporal contrastive clustering (DTCC) approach, which, for the first time to our knowledge, incorporates the contrastive learning paradigm into deep time series clustering research. Specifically, with two parallel views generated from the original time series and their augmentations, we utilize two identical auto-encoders to learn the corresponding representations, and in the meantime perform the cluster distribution learning by incorporating a k-means objective. Further, two levels of contrastive learning are simultaneously enforced to capture the instance-level and cluster-level contrastive information, respectively. With the reconstruction loss of the auto-encoder, the cluster distribution loss, and the two levels of contrastive losses jointly optimized, the network architecture is trained in a self-supervised manner and the clustering result can thereby be obtained. Experiments on a variety of time series datasets demonstrate the superiority of our DTCC approach over the state-of-the-art.
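Of the jointly optimized terms, the instance-level contrastive term can be sketched most compactly: the representations of a time series and its augmentation form a positive pair, while all other samples in the batch serve as negatives, in an NT-Xent style. The code below is an illustrative sketch under that assumption, not the exact DTCC objective.

```python
import torch
import torch.nn.functional as F

def instance_contrastive_loss(z1, z2, temperature=0.5):
    """NT-Xent-style instance-level contrastive loss between a batch of
    time-series representations z1 and their augmented views z2 (sketch).

    z1, z2: (N, D) embeddings of the two parallel views
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, D)
    sim = z @ z.t() / temperature                           # (2N, 2N) similarities
    sim.fill_diagonal_(float("-inf"))                       # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1 = torch.randn(8, 32)   # encoder output for the original series
z2 = torch.randn(8, 32)   # encoder output for the augmented series
print(instance_contrastive_loss(z1, z2))
```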